A Glitch That Allowed ChatGPT to Escape! Disordered Prompts Enable LLM to Rapidly Generate Ransomware, Jim Fan Stunned
AIbase · 2023-08-18 16:21:20
Netizens have discovered a new jailbreak technique that gets ChatGPT to generate ransomware from scrambled prompts. The trick exploits the same phenomenon that lets the human brain read words whose interior letters have been shuffled: the scrambled text slips past safety filters, yet the model still understands the request. Jim Fan was amazed at the GPT model's comprehension of disordered words.
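The article does not publish the exact prompts, but the scrambling it describes resembles the well-known "typoglycemia" effect, where words stay readable if their first and last letters are kept in place. A minimal sketch of that transformation, with illustrative function names and the first/last-letter rule assumed rather than confirmed by the article:

```python
import random

def scramble_word(word: str, rng: random.Random) -> str:
    # Shuffle only the interior letters, keeping the first and last
    # characters fixed -- the classic "typoglycemia" pattern. Short or
    # non-alphabetic tokens are left untouched.
    if len(word) <= 3 or not word.isalpha():
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_prompt(text: str, seed: int = 0) -> str:
    # Scramble each whitespace-separated word independently; a fixed
    # seed makes the output reproducible for demonstration purposes.
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())

if __name__ == "__main__":
    print(scramble_prompt("please explain how encryption works"))
```

Each output word is an anagram of the original with its endpoints preserved, which is why humans (and, per the article, GPT models) can still read it while keyword-based filters may not match it.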